F2 Longslit HK Reduction - When Things Go Wrong

An example of when things go wrong and what can be done about it

WARNING: This example shows a few tricks to try when things go wrong. There are a few problems with this dataset. In particular, the telluric is far from optimal (wrong airmass, taken much later than the science observation). The wavelength calibration also shows problems.

This is a study example, not even close to a smooth, normal reduction.


Science Dataset:

Observation UT date 2013 Oct 26, *2013 Oct 24
Data filename prefix S20131026S, *S20131024S

**File numbers:**
Science 681-684 (HK, HK, 2pix-slit, 90s)
Darks for science 797-803 (90s)
Flat 686 (HK, HK, 2pix-slit, 4s)
Darks for flat *032-037 (4s)
Arc 685 (HK, HK, 2pix-slit, 90s)
Darks for arc 797-803 (90s)

Telluric Dataset:

WARNING: This is the wrong telluric. Do not use it for science. But it is a heck of a good training set for when things go wrong.

Name: HIP 1378
Spectral Type: B9V
Temperature: $10700 K$
Airmass: 1.848
Airmass Science: 1.465
Delta Time: +50 min
Observation UT date 2013 Oct 26, *2013 Oct 24, **2013 Oct 25
Data filename prefix S20131026S, *S20131024S, **S20131025S

File numbers:

Telluric (HIP 1378) 699-702 (HK, HK, 2pix-slit, 30s)
Darks for telluric **401-406 (30s)
Flat 686 (HK, HK, 2pix-slit, 4s)
Darks for flat *032-037 (4s)
Arc 685 (HK, HK, 2pix-slit, 90s)
Darks for arc 797-803 (90s)

STEP 0: Define variables specific to this target and start parallel PyRAF session.


In [1]:
import os
redux_dir = '/Volumes/Rugged2/GS-2013B-Q-73/SDSSJ005918.23+002519.7/test_notebook/reduxHK'
os.chdir(redux_dir)

logfile = "HK005918.log"
database = "HK005918_database/"
raw_data_path = "../../../raw/"
flatroot = "S20131026S" ; flats = "686"
flatdarkroot = "S20131024S" ; flatdarks = "032-037"
arcroot = "S20131026S" ; arcs = "685"
arcdarkroot = "S20131026S" ; arcdarks = "797-803"
objroot = "S20131026S" ; objs = "681-684"
objdarkroot = "S20131026S" ; objdarks = "797-803"
telroot = "S20131026S" ; tels = "699-702"
teldarkroot = "S20131025S" ; teldarks = "401-406"

lxorder=3 ; lyorder=6  # used by nsfitcoords.
                       # Values are determined in Step 8.



Now get a PyRAF session ready. This will be needed for some interactive tasks that don't work well in the notebook.

Open an xterm or a Terminal, then:

cd '/Volumes/Rugged2/GS-2013B-Q-73/SDSSJ005918.23+002519.7/test_notebook/reduxHK'
pyraf

Then in the PyRAF session:

gemini
f2
unlearn gemini
unlearn f2
unlearn gnirs
unlearn gemtools

Get your PyRAF configured. In the PyRAF session:

iraf.f2.logfile = "HK005918.log"
iraf.f2.database = "HK005918_database/"
set rawdir = "../../../raw/"
set stdimage=imt2048
nsheaders('f2', logfile=iraf.f2.logfile)

STEP 1: Initialize the required packages

Launch ds9 before proceeding further.

Load the packages required for the notebook session.


In [ ]:
from pyraf import iraf
iraf.gemini()
iraf.f2()

Reset tasks to the default parameters. (Note: this doesn't seem to be working from the Python shell.)


In [ ]:
iraf.unlearn(iraf.gemini, iraf.f2, iraf.gnirs, iraf.gemtools)

STEP 2: Define various parameters and variables.


In [ ]:
iraf.f2.logfile = logfile
iraf.f2.database = database
rawdir = raw_data_path
iraf.set(stdimage='imt2048')

iraf.nsheaders('f2', logfile=iraf.f2.logfile)

If necessary to start from scratch, delete the log file and the database.


In [ ]:
if (iraf.access(iraf.f2.logfile)):
    iraf.delete(iraf.f2.logfile, verify='no')
if (iraf.access(iraf.f2.database)):
    iraf.delete(iraf.f2.database + '*', verify='no')

STEP 3: Create the reduction lists


In [ ]:
iraf.delete('flat.lis, flatdark.lis, arc.lis, arcdark.lis, obj.lis, objdark.lis, \
    tel.lis, teldark.lis', verify='no')

iraf.gemlist(flatroot, flats, Stdout='flat.lis')
iraf.gemlist(flatdarkroot, flatdarks, Stdout='flatdark.lis')
iraf.gemlist(arcroot, arcs, Stdout='arc.lis')
iraf.gemlist(arcdarkroot, arcdarks, Stdout='arcdark.lis')
iraf.gemlist(objroot, objs, Stdout='obj.lis')
iraf.gemlist(objdarkroot, objdarks, Stdout='objdark.lis')
iraf.gemlist(telroot, tels, Stdout='tel.lis')
iraf.gemlist(teldarkroot, teldarks, Stdout='teldark.lis')

iraf.concat('flat.lis, flatdark.lis, arc.lis, arcdark.lis, obj.lis, objdark.lis, \
    tel.lis, teldark.lis', 'all.lis')

# remove duplicates
all_file = open('all.lis', 'r')
all_lines = all_file.readlines()
all_file.close()
lines_seen = list()
for line in all_lines:
    if line not in lines_seen:
        lines_seen.append(line)
all_file = open('all.lis', 'w')
all_file.writelines(lines_seen)
all_file.close()

STEP 4: Visually inspect the data


In [ ]:
all_file = open('all.lis', 'r')
for line in all_file:
    image = line.strip() + '[1]'
    print image
    iraf.display(rawdir + image, 1)
    iraf.sleep(5)
    
all_file.close()

STEP 5: f2prepare all the data


In [ ]:
iraf.imdelete('f@all.lis', verify='no')
iraf.f2prepare('@all.lis', rawpath=rawdir, fl_vardq='yes', fl_correct='yes', \
               fl_saturated='yes', fl_nonlinear='yes')

STEP 6: Create the necessary dark frames


In [ ]:
iraf.delete('fflatdark.lis', verify='no')
iraf.imdelete('flatdark.fits', verify='no')
iraf.sections('f@flatdark.lis', Stdout='fflatdark.lis')
iraf.gemcombine('@fflatdark.lis', 'flatdark.fits', combine='average', \
    fl_vardq='yes', logfile=iraf.f2.logfile)

iraf.delete('farcdark.lis', verify='no')
iraf.imdelete('arcdark.fits', verify='no')
iraf.sections('f@arcdark.lis', Stdout='farcdark.lis')
iraf.gemcombine('@farcdark.lis', 'arcdark.fits', combine='average', \
    fl_vardq='yes', logfile=iraf.f2.logfile)

iraf.delete('fobjdark.lis', verify='no')
iraf.imdelete('objdark.fits', verify='no')
iraf.sections('f@objdark.lis', Stdout='fobjdark.lis')
iraf.gemcombine('@fobjdark.lis', 'objdark.fits', combine='average', \
    fl_vardq='yes', logfile=iraf.f2.logfile)

iraf.delete('fteldark.lis', verify='no')
iraf.imdelete('teldark.fits', verify='no')
iraf.sections('f@teldark.lis', Stdout='fteldark.lis')
iraf.gemcombine('@fteldark.lis', 'teldark.fits', combine='average', \
    fl_vardq='yes', logfile=iraf.f2.logfile)

STEP 7: Create the normalized flat field and BPM.

Subtract the dark from the flat images then cut


In [ ]:
iraf.imdelete ('df@flat.lis', verify='no')

file = open('flat.lis', 'r')
for line in file:
	image = line.strip()
	iraf.gemarith ('f' + image, '-', 'flatdark.fits', 'df' + image, \
		fl_vardq='yes', logfile=iraf.f2.logfile)

file.close()

iraf.imdelete ('cdf@flat.lis', verify='no')
iraf.f2cut ('df@flat.lis')

Construct the normalised flat field.

The flats are derived from images taken with the calibration unit (GCAL) shutter open ("lamps-on"). It is recommended to run nsflat interactively to ensure a good fit of the lamp's profile.

The fit order must be high enough to fit the lamp's profile but not too high since we do not want to fit the variations due to the pixel to pixel variation. For the HK filter, the fit should look like in the screenshots below. Note that in the zoomed plot, the fit matches the overall profile, not the high frequency variations.

The overall fit:

Zooming in to confirm that the high frequency signal is not being fit:

If the order needs to be changed, the :order command will be required and that does not work well in the iPython notebook. So copy the following to the PyRAF we prepared earlier, and find a good fit.

iraf.imdelete('flat.fits,f2_ls_bpm.pl', verify='no')
iraf.nsflat('cdf@flat.lis', flatfile='flat.fits', \
    bpmfile='f2_ls_bpm.pl', thr_flo=0.35, thr_fup=3.0, \
    fl_inter='yes', order=120)
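To see why the order matters, here is a small, purely hypothetical numpy sketch (the lamp shape, noise level, and orders are invented; this does not touch the real data). A modest order reproduces the smooth lamp profile and leaves the pixel-to-pixel noise in the residuals, while a very high order starts absorbing the noise itself:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Hypothetical lamp profile: a smooth low-order shape plus pixel-to-pixel noise.
x = np.linspace(-1.0, 1.0, 2048)
lamp = 1.0 - 0.4 * x**2 + 0.1 * x**3
noise = 0.02 * np.random.RandomState(42).randn(x.size)
flat = lamp + noise

# A modest order tracks the lamp profile; the residual is just the noise.
fit_low = cheb.chebval(x, cheb.chebfit(x, flat, 5))
resid_low = np.std(flat - fit_low)

# A very high order starts absorbing the noise itself (over-fitting),
# which would remove the very pixel-to-pixel signature the flat must keep.
fit_high = cheb.chebval(x, cheb.chebfit(x, flat, 200))
resid_high = np.std(flat - fit_high)
```

A good nsflat fit behaves like the low-order case: the residuals look like pure pixel noise, with no leftover lamp shape.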

STEP 8: Reduce the arc and determine the wavelength solution

The wavelength solution is derived from an arc taken right after the science observation, before anything is moved (eg. grating, slit, etc.)

If we are lucky, the arc will also work for the telluric observation. However, as we will see later for this target, when things in the optical path are moved, the arc might not work at all.

If the optical path moved such that the science arc solution cannot be applied, and no arcs were taken with the telluric, one might be able to use OH sky lines in the telluric. The exposure would have to be long enough to get good signal on the OH lines. The coverage of the OH lines is not as complete as the arc, but it might be the only solution.

In any case, we will reduce the arc and calculate the wavelength solution, as this is surely what we will be using for the science observations.

The wavelength solution step really should be done in interactive mode as it is not uncommon to have to reject lines and/or adjust the matches with the line list.

Reduce the arc


In [ ]:
# Subtract the dark from the arc images prior to cutting and flat dividing.

iraf.imdelete('df@arc.lis', verify='no')
iraf.nsreduce('f@arc.lis', outprefix='d', fl_cut='no', fl_process_cut='no', \
    fl_dark='yes', darkimage='arcdark.fits', fl_sky='no', fl_flat='no')

# Cut the arc images and divide by the normalised flat field image.

iraf.imdelete('rdf@arc.lis', verify='no')
iraf.nsreduce('df@arc.lis', fl_cut='yes', fl_dark='no', fl_sky='no', fl_flat='yes', \
    flatimage='flat.fits')

# Combine the arc files (if there is more than one arc file)

iraf.imdelete('arc.fits', verify='no')
iraf.delete('rdfarc.lis', verify='no')
iraf.sections('rdf@arc.lis//.fits', Stdout='rdfarc.lis')

count = 0
file = open('arc.lis', 'r')
for line in file:
	count += 1

if count == 1:
	iraf.copy ('@rdfarc.lis', 'arc.fits')
else:
	iraf.gemcombine ('@rdfarc.lis', 'arc.fits', fl_vardq='yes')

file.close()
print 'Done'

Determine the wavelength solution from the arc

Normally, the lines will be identified automatically and this step, while interactive, is used to delete a couple of outliers and visually confirm that the fit is good.

If, for some reason (it can happen), the lines do not get identified automatically and need to be marked manually, you will need to run the two commands below in the PyRAF session, since you will need full interactive support. But most of the time, running this in the notebook works okay.


In [ ]:
iraf.imdelete ('warc.fits', verify='no')
iraf.nswavelength ('arc.fits', fl_inter='yes')

Test recovery of a calibrated arc

This step is to confirm that if we apply the wavelength solution to the arc itself, we do recover a correctly wavelength-calibrated arc. The key step is nsfitcoords. The default lxorder and lyorder are both set to 2 which doesn't seem to be correct. Running nsfitcoords interactively on the arc will allow us to discover the best lxorder and lyorder for this wavelength solution, and use that as a starting point later when using the arc wavelength solution (warc.fits) on the telluric and the science.

Interactive IRAF tasks don't work very well from the iPython Notebook. This is a case where we need to use PyRAF directly. Cut and paste these instructions in the PyRAF session that we prepared in Step 0.

iraf.imdelete('farc.fits', verify='no')
iraf.nsfitcoords('arc.fits', lamptransf='warc.fits', fl_inter='yes')
# Note down the xorder and yorder needed for a good fit.  Use
# that whenever warc.fits is used by nsfitcoords.
# :xorder 3, :yorder 6

The default xorder and yorder are 2 and 2. Below is the residual plot one gets with those defaults: the residuals are large and show a pattern, clearly indicating that the fit is not good at all.

Adjusting the xorder to 3 tightens the x-residuals.

Finally, adjusting the yorder to 6 reduces the y-residuals to within acceptable limits and removes the big wave pattern.

Record the lxorder and lyorder in the notebook session for future use.


In [ ]:
lxorder=3 ; lyorder=6

Now continue with the test.


In [ ]:
iraf.imdelete('tfarc.fits', verify='no')
iraf.nstransform('farc.fits')
iraf.display('tfarc.fits[sci,1]', 1)

# display the first telluric to identify on which column the spectrum falls in
# the A position.
first_tel = iraf.head('tel.lis', nlines=1, Stdout=1)[0].strip()
iraf.display('f'+first_tel+'[1]', 1)
# position of the telluric is column 928
tel_position = 928

iraf.splot('tfarc.fits[sci,1]', line=tel_position, options="flip")

Check that the arc lines are at the correct wavelength. Use 'k' on each side of a line to get the centroid. Then compare with the line list and the diagrams.

Resource:
http://www.gemini.edu/sciops/instruments/niri/?q=node/10166

The NIRI plots from that webpage work well.

Result: (Kathleen) The arc gets transformed correctly with lxorder=3, lyorder=6. The lines are within 1-2 Angstroms of the expected values. Note that the shape of the arc lines is not symmetric. No idea why, but it surely affects centering. Bottom line is that with the proper lxorder and lyorder, the wavelength solution leads to a good calibration.
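As a sanity check on what 'k' reports, a flux-weighted centroid can be sketched in a few lines of numpy (the line wavelength, width, and grid below are invented for illustration; they are not from this dataset):

```python
import numpy as np

def line_centroid(wave, flux, w1, w2):
    """Flux-weighted centroid of a line between w1 and w2 -- roughly
    what splot's 'k' keystroke reports (minus its profile fit)."""
    m = (wave >= w1) & (wave <= w2)
    w, f = wave[m], flux[m]
    f = f - f.min()              # crude continuum removal
    return np.sum(w * f) / np.sum(f)

# Hypothetical arc line at 20581 A on a 4 A/pix grid.
wave = np.arange(20500.0, 20660.0, 4.0)
flux = 100.0 * np.exp(-0.5 * ((wave - 20581.0) / 6.0) ** 2) + 10.0
centroid = line_centroid(wave, flux, 20550.0, 20610.0)   # close to 20581
```

For an asymmetric line profile like the one noted above, a flux-weighted centroid like this will be pulled toward the heavier wing, which is why the measured positions can be off by an Angstrom or two.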

STEP 9: Reduce the telluric standard dataset


In [ ]:
iraf.imdelete('df@tel.lis', verify='no')
iraf.nsreduce('f@tel.lis', outprefix='d', fl_cut='no', fl_process_cut='no', \
    fl_dark='yes', darkimage='teldark.fits', fl_sky='no', fl_flat='no')

iraf.imdelete('rdf@tel.lis', verify='no')
iraf.nsreduce('df@tel.lis', fl_cut='yes', fl_dark='no', fl_sky='yes', \
    fl_flat='yes', flatimage='flat.fits')

STEP 10: Combine the reduced telluric datasets


In [ ]:
iraf.imdelete('tel_comb.fits', verify='no')
iraf.nscombine('rdf@tel.lis', output='tel_comb.fits', fl_shiftint='no', fl_cross='yes')

iraf.display('tel_comb.fits[SCI,1]', 1)

STEP 11: Wavelength calibrate the telluric data

As it turns out, this step can be tricky. Theoretically, the arc taken with the science should be fine if the telluric is taken just after the science and the optical path is identical. Unfortunately, we have found that the optical path can vary, leading to the telluric being offset from the science. This effect was indeed observed for this target. Below, we document the whole process that led to the discovery of the problem and to the solution. Note that from now on (circa Sep 2014), an arc will be observed with the telluric; Ruben Diaz has modified the OT library accordingly.

We first (Option 1) use the arc that was taken with the science and inspect the results. Then (Option 2) we look at the OH sky lines to investigate the problem and measure the offset. That offset is applied to the wavelength solution and the translated solution is applied to the telluric. We find that the results are good. Finally (Option 3), since the sky lines are quite strong, we use them to calculate a new wavelength solution and apply it to the telluric. We find that the results are very good, and that is the solution used for the science.

The key tasks used for this step are:

  • (nswavelength - Option 3 only: Used to calculate a wavelength solution from the sky lines.)
  • nsfitcoords: Used to determine the final 2-D surface solution, based on a previously calculated wavelength solution, to be applied to the data.
  • nstransform: Actually applies the 2-D solution found by nsfitcoords to the data.

Note that spatial rectification (s-distortion correction) is not needed with F2 longslit data.

Option 1: Use the arc taken with the science.


In [ ]:
print 'Using lxorder, lyorder = ', lxorder, lyorder
iraf.imdelete('ftel_comb.fits', verify='no')
iraf.nsfitcoords('tel_comb.fits', lamptransf='warc.fits', \
                 lxorder=lxorder, lyorder=lyorder)

iraf.imdelete ('tftel_comb.fits', verify='no')
iraf.nstransform ('ftel_comb.fits')

Check that the spectral features (the hydrogen lines in this case) fall where they are supposed to be.

Cut and paste these instructions in the PyRAF session. (The splot interactive ':'-commands do not work well from the notebook.)

iraf.display ('tftel_comb[sci,1]', 1)
# note on which column the telluric falls
iraf.tel_position = 928

iraf.splot ("tftel_comb[sci,1]", line=iraf.tel_position)
# :nsum 30
# Use 'k' on each side of a line to centroid.
# Compare with the atomic line list.

In this case, one finds that the measured wavelengths of the hydrogen lines in the corrected telluric do NOT match the wavelengths in the atomic line list. The wavelength solution calculated from the arc taken with the science observation CANNOT be used on the telluric.

The investigation of what's going on is found in Option 2, below.

Option 2: Use the sky lines to measure any offsets and apply offset to arc solution.

Measure the offset.

One telluric frame has strong enough sky lines. Therefore, we will reduce that frame on its own, but without sky subtraction.


In [ ]:
first_tel = iraf.head('tel.lis', nlines=1, Stdout=1)[0].strip()
iraf.imdelete('Xdf'+first_tel, verify='no')
iraf.nsreduce('df'+first_tel, outprefix='X', fl_cut='yes', fl_dark='no', \
              fl_sky='no', fl_flat='yes', flatimage='flat.fits')
iraf.display('Xdf'+first_tel+'[sci,1]', 1)
# confirm that the sky lines are strong enough.

Apply the science arc solution to the non-sky subtracted telluric frame.


In [ ]:
print 'Using lxorder, lyorder = ', lxorder, lyorder
iraf.imdelete('fXdf'+first_tel, verify='no')
iraf.nsfitcoords ('Xdf'+first_tel, lamptransf='warc.fits', \
                  lxorder=lxorder, lyorder=lyorder)

iraf.imdelete('tfXdf'+first_tel, verify='no')
iraf.nstransform('fXdf'+first_tel)

Use splot to measure the wavelength of the sky lines in the corrected telluric. Do this in the PyRAF session, not in the notebook.

iraf.splot ('tfXdf'+first_tel+'[sci,1]', line=750)
# line=750 is just somewhere in the middle.
# :nsum 30

Check that the OH sky lines are at the correct wavelength. Use 'k' on each side of a line to get the centroid. Then compare with the plots and line lists.

Resources:
http://www.jach.hawaii.edu/UKIRT/astronomy/calib/spec_cal/oh.html
http://www.jach.hawaii.edu/UKIRT/astronomy/calib/spec_cal/ohlines.html

What we find here is that there is a constant offset in wavelength (through the H and K bands) of $+11 \pm 0.5$ Å compared to the OH line list. Something in the optical path moved between the arc and the telluric observation.
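Turning the per-line measurements into a single offset is just a median and a scatter. The numbers below are invented to mimic the measurement described above; they are not the actual measured or catalogue wavelengths:

```python
import numpy as np

# Invented measured-vs-catalogue OH line wavelengths (Angstroms),
# mimicking a roughly constant ~+11 A shift across H and K.
catalog = np.array([15332.0, 16194.0, 16840.0, 17330.0, 19771.0, 20339.0])
measured = catalog + np.array([11.3, 10.6, 11.1, 10.8, 11.4, 10.9])

offsets = measured - catalog
offset = np.median(offsets)   # the global shift to apply
scatter = np.std(offsets)     # should be small if the shift is rigid
```

A constant offset with small scatter across both bands points to a rigid shift (something moved in the optical path), not to a different dispersion solution.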

Offset the arc wavelength solution and apply.

The idea now is to use the wavelength solution obtained from the arc, but shift it by 11 Å before applying it to the telluric.


In [ ]:
print 'Using lxorder, lyorder = ', lxorder, lyorder
iraf.imdelete('ftel_comb.fits', verify='no')
iraf.nsfitcoords('tel_comb.fits', lamptransf='warc.fits', \
                 lxorder=lxorder, lyorder=lyorder)

iraf.copy(iraf.f2.database+'fcftel_comb_SCI_1_lamp', \
          iraf.f2.database+'fcftel_comb_offset_SCI_1_lamp')
iraf.hedit('ftel_comb.fits[sci,1]', 'FCFIT1', \
           'ftel_comb_offset_SCI_1_lamp', \
           update='yes', verify='no')

Edit the fitcoords file to apply the offset to the solution.

  1. First go to the very last solution in the file.
  2. Change the name of the solution, on the begin line to ftel_comb_offset_SCI_1_lamp.
  3. Then, subtract 11 Å from the 9th line of the last solution in the file (start counting after the surface record). To make sure you got it, see the line identified in this screenshot:


In [ ]:
filename = iraf.f2.database+'fcftel_comb_offset_SCI_1_lamp'
!xterm -e "vi $filename"

Then apply that transformation to the combined telluric.


In [ ]:
iraf.imdelete ('tftel_comb.fits', verify='no')
iraf.nstransform ('ftel_comb.fits')

Use splot to measure the wavelength of the stellar features in the corrected telluric. Do this in the PyRAF session, not in the notebook.

iraf.splot ('tftel_comb[sci,1]', line=iraf.tel_position)
# line=iraf.tel_position is where the telluric falls. It should 
#        still be defined from previous investigation steps.
#        If not, just display the image and identify the column
#        the telluric falls on.
# :nsum 30

Option 2 is obviously less than ideal but it gives decent results. However, in our case, we have nice, strong OH sky lines in our telluric observations and we will use those to calculate the wavelength solution rather than using a translated arc solution. Moving on to Option 3, then.

Option 3: Use the sky lines to calculate the wavelength solution.

Reduce the first telluric, keeping the OH sky lines. Note that if the OH sky lines were a bit faint, one could stack all the tellurics together, without shifts, and increase the signal-to-noise on the sky lines, then proceed with that image instead.

IMPORTANT: There are no OH sky lines redward of 22750 Å or blueward of 15000 Å, which means that the wavelength calibration in those regions will have reduced accuracy. In our case that's fine; we don't have features of interest in that part of the spectrum for this source. If the lack of lines were a problem, one might want to consider Option 2 instead.


In [ ]:
first_tel = iraf.head('tel.lis', nlines=1, Stdout=1)[0].strip()
iraf.imdelete('Xdf'+first_tel, verify='no')
iraf.nsreduce('df'+first_tel, outprefix='X', fl_cut='yes', fl_dark='no', \
              fl_sky='no', fl_flat='yes', flatimage='flat.fits')
iraf.display('Xdf'+first_tel+'[sci,1]', 1)
# confirm that the sky lines are strong enough.

Determine the wavelength solution for the telluric using the OH sky lines.

IMPORTANT: Run this interactively and do delete the bad lines from the fit and re-fit. There are bad line identifications redward of 22750 Å and blueward of 15000 Å. Remove all those bad lines.


In [ ]:
iraf.imdelete('wXdf'+first_tel+'.fits', verify='no')
iraf.nswavelength('Xdf'+first_tel+'.fits', 
                  coordlist='gemini$gcal/linelists/ohlines.dat', 
                  fl_inter='yes')

Use that new wavelength solution on the telluric.

Most likely the lxorder and lyorder should be the same as for the arc solution, given that this must be a function of the optics. I (Kathleen) did run nsfitcoords interactively in a PyRAF session and I confirm that lxorder 3 and lyorder 6 are required even with the OH line solution. No need to run it interactively anymore.


In [ ]:
iraf.imdelete('ftel_comb.fits', verify='no')
iraf.nsfitcoords('tel_comb.fits', lamptransf='wXdf'+first_tel+'.fits', \
                 fl_inter='no', lxorder=3, lyorder=6)

iraf.imdelete('tftel_comb.fits', verify='no')
iraf.nstransform('ftel_comb.fits')

Verify that the solution works well before proceeding further. In the PyRAF session:

iraf.splot('tftel_comb.fits[sci,1]', line=iraf.tel_position)
# :nsum 30
# Use 'k' to centroid the stellar feature, 
# ie. the hydrogen lines to make sure
# that they fall where they are supposed to.

STEP 12: Extract the telluric spectrum

We run nsextract in non-interactive mode since the telluric is quite bright. (Running it in interactive mode does confirm that the automatic aperture selection is coincident with the telluric spectrum.)


In [ ]:
iraf.imdelete('xtftel_comb.fits', verify='no')
iraf.nsextract('tftel_comb.fits', fl_apall='yes', fl_findneg='no', \
    fl_inter='no', fl_trace='yes')

iraf.splot ('xtftel_comb.fits[SCI,1]')

This telluric spectrum will be used later. It will then need to be corrected to remove the stellar features, either by fitting Voigt profiles to each hydrogen line and removing each feature, or by fitting a stellar model to it.

STEP 13: Reduce the science data


In [ ]:
iraf.imdelete('df@obj.lis', verify='no')
iraf.nsreduce('f@obj.lis', outprefix='d', fl_cut='no', fl_process_cut='no', \
    fl_dark='yes', darkimage='objdark.fits', fl_sky='no', fl_flat='no')

iraf.imdelete('rdf@obj.lis', verify='no')
iraf.nsreduce('df@obj.lis', fl_cut='yes', fl_dark='no', fl_sky='yes', \
    fl_flat='yes', flatimage='flat.fits')

STEP 14: Combine the science data

The rejtype parameter is set to minmax in the example included in the Gemini IRAF package. Kathleen finds that for faint targets that does not work at all. Therefore, here we set rejtype to none, which seems to work well.


In [ ]:
iraf.imdelete('obj_comb.fits', verify='no')
iraf.nscombine('rdf@obj.lis', output='obj_comb.fits', fl_shiftint='no', \
    fl_cross='no', rejtype='none')

iraf.display('obj_comb.fits[SCI,1]', 1)

STEP 15: Wavelength calibrate the science data

It is assumed that the arc has been taken right after (or before) the science observations, without anything moving in-between, other than getting the arc in the field of view.

While the OH sky lines could be used, the coverage is not always adequate across the dispersion axis and the sky lines tend to be a bit broader than the arc lines. True, we used the OH sky lines for the telluric above, but that was because we could not use the arc with greater accuracy.

We will nevertheless double-check that the arc works well on the science observation. To that end, we will use a science frame that has not been sky subtracted yet, and apply the arc solution. We will measure the OH sky lines to confirm that they fall where they are supposed to fall. If this test passes, then we will apply the arc solution to the combined science frame from Step 14.

(It should pass. If it doesn't, something is really wrong with the data and one might have to use the OH sky lines for the wavelength calibration after all.)

The key tasks used for this step are:

  • nsfitcoords: Used to determine the final 2-D surface solution, based on a previously calculated wavelength solution, to be applied to the data.
  • nstransform: Actually applies the 2-D solution found by nsfitcoords to the data.

Spatial rectification (s-distortion correction) is not usually needed with F2 longslit data.

Test the accuracy of the wavelength solution using the sky lines.

We reduce the first science frame but without the sky subtraction.


In [ ]:
first_obj = iraf.head('obj.lis', nlines=1, Stdout=1)[0].strip()
iraf.imdelete('Xdf'+first_obj, verify='no')
iraf.nsreduce('df'+first_obj, outprefix='X', fl_cut='yes', fl_dark='no', \
              fl_sky='no', fl_flat='yes', flatimage='flat.fits')
iraf.display('Xdf'+first_obj+'[sci,1]', 1)
# confirm that the sky lines are strong enough.

We apply the arc solution on the frame with the sky lines.


In [ ]:
iraf.imdelete('fXdf'+first_obj, verify='no')
iraf.nsfitcoords ('Xdf'+first_obj, lamptransf='warc.fits', lxorder=3, lyorder=6)

iraf.imdelete('tfXdf'+first_obj, verify='no')
iraf.nstransform('fXdf'+first_obj)

Use splot to measure the wavelength of the sky lines in the corrected science frame. Do this in the PyRAF session, not in the notebook.

first_obj = iraf.head('obj.lis', nlines=1, Stdout=1)[0].strip()
iraf.splot ('tfXdf'+first_obj+'[sci,1]', line=750)
# line=750 is just somewhere in the middle.
# :nsum 30
# Use 'k' again and sky line plots and lists to check position
#  of the sky lines.

Resources:
http://www.jach.hawaii.edu/UKIRT/astronomy/calib/spec_cal/oh.html
http://www.jach.hawaii.edu/UKIRT/astronomy/calib/spec_cal/ohlines.html

Verifications of the OH lines in the H-band section and the K-band section show that the lines do show up at the correct wavelength after the transformation. We can use the arc.

Apply the wavelength solution from the arc to the combined science frame.


In [ ]:
iraf.imdelete('fobj_comb.fits', verify='no')
iraf.nsfitcoords('obj_comb.fits', lamptransf='warc.fits', lxorder=3, lyorder=6)

iraf.imdelete('tfobj_comb.fits', verify='no')
iraf.nstransform('fobj_comb.fits')

STEP 16: Extract the science spectrum

For faint science spectra, the telluric can be used as a reference when extracting the science spectrum; set trace=tftel_comb.fits.

However, in this case, the science spectrum is strong enough to be traced.


In [ ]:
iraf.display('tfobj_comb.fits[sci,1]', 1)

For this extraction, one needs to:

  1. delete, with 'd', the aperture automatically selected. It selects some random spike at the far left of the cross-section.
  2. mark, with 'm', the real location of the science spectrum, around column 950.
  3. adjust, with 'u' and 'l', the width of the aperture; it seems a bit too wide.

In [ ]:
iraf.imdelete('xtfobj_comb.fits', verify='no')
iraf.nsextract('tfobj_comb.fits', fl_apall='yes', fl_findneg='no', \
               fl_inter='yes', fl_trace='yes')

iraf.splot('xtfobj_comb.fits[SCI,1]')

STEP 17: Apply the telluric correction to the science spectrum

No matter what standard star you observed for the telluric correction, there will be stellar features in the spectrum. There are two ways to remove the stellar features from the telluric spectrum: 1) fit a line profile to each stellar feature and remove it from the telluric spectrum; 2) use a stellar model: resample and scale it, then subtract it from the telluric to remove the stellar features.

The stellar model option is possibly better, but whether it is necessary is still unclear (circa late-Sep 2014). The question is whether we need to fit a blackbody to the telluric to properly calibrate the science spectrum. (Depends a bit on the science, I (KL) think). nstelluric does fit the continuum of the telluric and of the science to scale the standard. Is this sufficient? TBD.

Note: The telluric star is a B9V star. (We have not been able to find a B9V model with lines in the near-IR, but an A0V model should do just fine.)

For now, we use rmfeature to fit a Voigt profile to each hydrogen feature and remove it from the telluric spectrum. The procedure is a bit tedious because of the very clunky nature of rmfeature, but it does work in the end, with a bit of patience.

Remove stellar features from telluric spectrum

The stellar features we have to deal with are hydrogen absorption lines. There's only one in the K-band, but there are quite a few in the H-band. We start from the red-end and make our way to the bluest hydrogen we can identify and fit.

There are two approaches to deal with the stellar features:

  1. manual removal by fitting Voigt profiles to the features
  2. use of a stellar model.

Right now, since we do not have a procedure to resample and scale the stellar model, we will remove the features manually with a custom-made Python script, rmfeature. Note that there are tools in IRAF to do that type of removal, and also to deal with the stellar model, but I (Kathleen) have no idea what those are or how to use them.

Manual removal of features with a Voigt profile

The stellar features are (in Å): 21655 [1], 17362 [2], 16807 [3], 16407 [4], 16109 [5], 15881 [6], 15701 [7], 15557 [clean].

We are not correcting for 15192, 15261, 15342, 15439 as they are really lost in the noise/telluric absorption.
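rmfeature fits Voigt profiles. As a rough illustration of the line shape involved, here is the pseudo-Voigt approximation, a standard closed-form stand-in for the true Voigt profile (which is a Gaussian-Lorentzian convolution). The feature position comes from the list above; the depth, width, and mixing parameter are invented:

```python
import numpy as np

def pseudo_voigt(x, center, fwhm, eta):
    """Pseudo-Voigt: a linear mix of a Gaussian and a Lorentzian of
    equal FWHM. eta=0 -> pure Gaussian, eta=1 -> pure Lorentzian.
    Peak value is 1 at the line center."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gamma = fwhm / 2.0
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    lorentz = gamma ** 2 / ((x - center) ** 2 + gamma ** 2)
    return eta * lorentz + (1.0 - eta) * gauss

# Model an absorption dip at the 21655 A feature listed above
# (depth, width, and eta are made up for illustration):
wave = np.linspace(21400.0, 21900.0, 501)
continuum = np.ones_like(wave)
line = continuum - 0.3 * pseudo_voigt(wave, 21655.0, 60.0, 0.4)
```

Both components share the same FWHM, so the mixed profile falls to exactly half its peak at center ± FWHM/2, whatever the value of eta.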


In [ ]:
!xterm -e 'rmfeature xtftel_comb.fits tel_temp1.fits'
# [1] fit 1097 to 1197

In [ ]:
!xterm -e 'rmfeature tel_temp1.fits tel_temp2.fits'
# [2] fit 569 to 625

In [ ]:
!xterm -e 'rmfeature tel_temp2.fits tel_temp3.fits'
# [3] fit 509 to 541

In [ ]:
!xterm -e 'rmfeature tel_temp3.fits tel_temp4.fits'
# [4] fit 462 to 491

In [ ]:
!xterm -e 'rmfeature tel_temp4.fits tel_temp5.fits'
# [5] fit 416 to 473   (not great, but okay.)

In [ ]:
!xterm -e 'rmfeature tel_temp5.fits tel_temp6.fits'
# [6] fit 394 to 412

In [ ]:
!xterm -e 'rmfeature tel_temp6.fits tel_temp7.fits'
# [7] fit 370 to 395

In [ ]:
!xterm -e 'rmfeature tel_temp7.fits tel_clean.fits'
# [clean] fit 349 to 370  (not great, but best fit I can get)

Creation of a blackbody

The telluric star is HIP 1378. It is a B9V star with an approximate temperature of $10,700 K$.

Resource for temperatures:
http://www.jach.hawaii.edu/UKIRT/astronomy/utils/temp.html

First, we figure out the start and end wavelength, and the sampling we require to match the science spectrum. Then we create the blackbody spectrum and convert it into a MEF file. This will allow us to keep propagating the variance plane.


In [ ]:
iraf.wspectext('xtfobj_comb.fits[sci,1]', 'xtfobj_comb.txt', header='no')
wstart = int(round(float(iraf.head('xtfobj_comb.txt', nlines=1, Stdout=1)[0].split()[0])))
wend = int(round(float(iraf.tail('xtfobj_comb.txt', nlines=1, Stdout=1)[0].split()[0])))
nsamples = int(iraf.hselect('xtfobj_comb.fits[sci,1]', 'NAXIS1', 'yes', Stdout=1)[0])

print 'Start wavelength: ', wstart
print 'End wavelength: ', wend
print 'Number of samples: ', nsamples

In [ ]:
iraf.imdelete('B9V_blackbody_sci.fits', verify='no')
iraf.mk1dspec('B9V_blackbody_sci.fits', ncols=nsamples, \
              wstart=wstart, wend=wend, temperature=10700)
iraf.splot('B9V_blackbody_sci.fits')

iraf.imdelete('B9V_blackbody_var.fits', verify='no')
iraf.imdelete('B9V_blackbody_dq.fits', verify='no')
from astropy.io import fits
import numpy as np
bb_sci = fits.open('B9V_blackbody_sci.fits')
# Zero-variance and all-good DQ planes, reusing the blackbody header
# so the wavelength WCS carries over.
zeros = np.zeros(bb_sci[0].data.size, dtype=np.float32)
fits.PrimaryHDU(data=zeros, header=bb_sci[0].header).writeto('B9V_blackbody_var.fits')
fits.PrimaryHDU(data=zeros, header=bb_sci[0].header).writeto('B9V_blackbody_dq.fits')
bb_sci.close()

iraf.imdelete('B9V_blackbody.fits', verify='no')
iraf.wmef('B9V_blackbody_sci.fits,B9V_blackbody_var.fits,B9V_blackbody_dq.fits',\
          'B9V_blackbody.fits')

iraf.fxhead('B9V_blackbody.fits')
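The same SCI/VAR/DQ packaging can also be done entirely with astropy instead of wmef. A sketch, with made-up data and an illustrative output file name:

```python
import numpy as np
from astropy.io import fits

sci = np.ones(1553, dtype=np.float32)    # stand-in for the blackbody spectrum
var = np.zeros_like(sci)                 # zero variance: a noiseless model
dq = np.zeros(sci.size, dtype=np.int16)  # all pixels flagged good

# Gemini-style MEF: empty primary header unit plus named extensions.
hdulist = fits.HDUList([
    fits.PrimaryHDU(),
    fits.ImageHDU(data=sci, name='SCI'),
    fits.ImageHDU(data=var, name='VAR'),
    fits.ImageHDU(data=dq, name='DQ'),
])
hdulist.writeto('B9V_blackbody_mef.fits', overwrite=True)

with fits.open('B9V_blackbody_mef.fits') as f:
    extnames = [hdu.name for hdu in f[1:]]
```

Naming the extensions SCI/VAR/DQ is what lets gemarith keep propagating the variance plane downstream.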

Use the clean telluric to correct the science spectrum

We will try two approaches:

  1. nstelluric approach
  2. manual approach

Spoiler warning: The nstelluric approach gives significantly better results (less noisy and more accurate continuum shape). See the Discussion section below, after the Manual approach section.

The nstelluric approach

nstelluric normalizes both the telluric and the science, then cross-correlates the two to find first the shift in pixels between them, and then the necessary scaling factor. Part of the scaling factor applied to the telluric is the airmass ratio. Finally, the shifted and scaled normalized telluric is divided into the non-normalized science.
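The shift-finding step can be illustrated with a simple cross-correlation on synthetic normalized spectra (this is only a sketch of the idea, not the nstelluric implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
template = rng.normal(1.0, 0.2, 1553)    # stand-in normalized telluric
shift_true = -2                          # integer shift, for simplicity
science = np.roll(template, shift_true)  # science features offset by flexure

# Cross-correlate the continuum-subtracted spectra over a small lag range
# and take the peak.
lags = np.arange(-10, 11)
cc = [np.dot(science - 1.0, np.roll(template - 1.0, lag)) for lag in lags]
shift_found = lags[int(np.argmax(cc))]
```

nstelluric refines this to sub-pixel precision (hence the -1.94 pixel shift reported below) by fitting the correlation peak.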

Procedure
Fit the continuum of the telluric and the science to scale the two to each other. Use only the H-band and K-band regions, ignore the regions where the atmosphere is opaque. Use the s option in nstelluric to define the regions to use (ie. the "sample"), or :sample to enter the regions precisely.

Run this interactive task in a PyRAF session. See screenshots below to get an idea of what it should look like.

iraf.imdelete('axtfobj_comb.fits', verify='no')
iraf.nstelluric('xtfobj_comb.fits', 'tel_clean', fitorder=12, \
    threshold=0.01, fl_inter='yes')

The order used to fit the telluric is: 4
The order used to fit the science is: 4 # same as for the telluric.
Scale: 1
Shift: -1.94
Sample: 19:62 311:643 1054:1540

Given that the arc solution fits the science and the telluric differently, a slight shift is to be expected.

Fitting the telluric: (screenshot)

Fitting the science target: (screenshot)

The next window is for adjusting the cross-correlation solution, if needed. In this case, after a bit of exploration of the parameter space, we find that the automatic solution is the best. A good area to focus on is the $2\mu m$ region. (screenshot)

Zooming in on the $2\mu m$ region, see how the strong telluric features get cleanly canceled. (screenshots)

iraf.splot('axtfobj_comb.fits[SCI,1]', ymin=-200, ymax=1000)

iraf.specplot('xtfobj_comb.fits[sci,1],axtfobj_comb.fits[sci,1],\
    tel_clean.fits[sci,1]', fraction=0.05, yscale='yes', ymin=-100,\
    ymax=4000)

The telluric spectrum, i.e. the spectrum of a star, was divided into the science spectrum. This means that to recover the correct continuum for the science spectrum, one needs to multiply the science spectrum by a blackbody (if rmfeature was used) or by a stellar model (if rmfeature was not used).
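The arithmetic is worth spelling out: the observed science is (intrinsic science) × (atmosphere), and the cleaned telluric is (stellar continuum) × (atmosphere). Dividing by the telluric and multiplying by a blackbody that approximates the stellar continuum therefore recovers the intrinsic science spectrum. A toy numerical check (all values made up):

```python
import numpy as np

atmosphere = np.array([0.9, 0.5, 0.95, 0.7])      # telluric transmission
science_true = np.array([10.0, 11.0, 12.0, 13.0]) # intrinsic science spectrum
star_continuum = np.array([2.0, 1.9, 1.8, 1.7])   # ~ blackbody shape

science_obs = science_true * atmosphere
telluric_obs = star_continuum * atmosphere        # hydrogen lines already removed

corrected = science_obs / telluric_obs * star_continuum
```

The correction is exact only to the extent that the blackbody matches the true stellar continuum, which is why the choice of temperature matters.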


In [ ]:
iraf.imdelete('axtfobj_bb.fits', verify='no')
iraf.gemarith('axtfobj_comb.fits','*', 'B9V_blackbody.fits', \
              'axtfobj_bb.fits', fl_vardq='yes')

In [ ]:
iraf.splot('axtfobj_bb.fits[sci,1]', ymin=-20000, ymax=300000)

During the extraction, some 2D WCS keywords are not cleaned up correctly. Here is how to remove the 2D information from what is now a 1D spectrum.


In [ ]:
iraf.hedit('axtfobj_bb.fits[sci,1]', 'CD2_2', delete='yes',\
            verify='no',update='yes')
iraf.hedit('axtfobj_bb.fits[sci,1]', 'CTYPE2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('axtfobj_bb.fits[sci,1]', 'LTM2_2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('axtfobj_bb.fits[var,1]', 'CD2_2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('axtfobj_bb.fits[var,1]', 'CTYPE2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('axtfobj_bb.fits[var,1]', 'LTM2_2', delete='yes',\
           verify='no',update='yes')

Using Kathleen's Python splot in a shell, plot the final spectrum with line identification and with an overplot of the error.


In [ ]:
!splot axtfobj_bb.fits -l quasar -z 0.6128 -y -25000 250000 --variance 2

The manual approach

Here we do not do any shifting or scaling or airmass correction. We divide by the telluric standard and multiply by the blackbody.


In [ ]:
iraf.imdelete('xtfobj_telrm.fits', verify='no')
iraf.gemarith('xtfobj_comb.fits','/','tel_clean.fits', \
              'xtfobj_telrm.fits', fl_vardq='yes')

In [ ]:
iraf.splot('xtfobj_telrm.fits[sci,1]', ymin=-0.01, ymax=0.045)

In [ ]:
iraf.imdelete('xtfobj_telbb.fits', verify='no')
iraf.gemarith('xtfobj_telrm.fits', '*', 'B9V_blackbody.fits', \
              'xtfobj_telbb.fits', fl_vardq='yes')

In [ ]:
iraf.splot('xtfobj_telbb.fits[sci,1]', ymin=-2.5, ymax=17.5)


In [ ]:
iraf.hedit('xtfobj_telbb.fits[sci,1]', 'CD2_2', delete='yes',\
            verify='no',update='yes')
iraf.hedit('xtfobj_telbb.fits[sci,1]', 'CTYPE2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('xtfobj_telbb.fits[sci,1]', 'LTM2_2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('xtfobj_telbb.fits[var,1]', 'CD2_2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('xtfobj_telbb.fits[var,1]', 'CTYPE2', delete='yes',\
           verify='no',update='yes')
iraf.hedit('xtfobj_telbb.fits[var,1]', 'LTM2_2', delete='yes',\
           verify='no',update='yes')

In [ ]:
!splot xtfobj_telbb.fits -l quasar -z 0.6128 -y -2 15 --variance 2

The final plot in the manual method is much noisier, and the slope of the continuum is shallower. One can make the spectrum less noisy by applying the shift found by nstelluric, and the slope can be made steeper by applying the airmass correction nstelluric used. This is left as an exercise... (I, Kathleen, did all this already to convince myself that the nstelluric results were correct.)

Discussion on telluric correction techniques

Doing the telluric correction manually is suited for cases where the telluric standard was observed very close in time to the science (to avoid time-dependent sky variations) and at the same airmass, and obviously where nothing in the optical path moved and flexure is minimal.

In the present case, the optics moved, the telluric was observed 50 min after the science target, and at an airmass of 1.848 when the science was observed at airmass 1.465. Clearly, this is a messy observation.

The movement of the optics (or flexure) causes the telluric to be offset from the science spectrum, as clearly seen during the wavelength calibration. This was also picked up by nstelluric, which found that a shift of -1.94 pixels was necessary.

The large airmass difference is corrected for by nstelluric. In the manual approach, however, that difference was not taken into account (one could, but we did not above). The result is that the non-airmass-corrected spectrum shows a much shallower continuum slope.
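The airmass correction works because, to first order, telluric absorption scales as the transmission raised to the ratio of airmasses: $T_{sci} \approx T_{tel}^{X_{sci}/X_{tel}}$. With the airmasses above (science 1.465, telluric 1.848) the exponent is below 1, so the telluric absorption must be weakened before division. A small numeric sketch (transmission values made up):

```python
import numpy as np

X_sci, X_tel = 1.465, 1.848
trans_tel = np.array([0.99, 0.80, 0.50, 0.20])  # telluric transmission at X_tel

# Scale to the science airmass: absorption is weaker at lower airmass,
# so the scaled transmission is everywhere higher.
trans_sci = trans_tel ** (X_sci / X_tel)
```

Skipping this step, as in the manual approach above, leaves the correction too strong in the deep telluric bands, which is what tilts the continuum.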

At least the atmosphere seems to have been fairly stable: even though the data were taken 50 minutes later, nstelluric did not have to apply a scaling factor (other than the airmass correction factor).

What was found here is that the nstelluric task, when used correctly, produces rather good results that are far less noisy than those from the manual approach, especially when there are issues with the telluric standard observations. With enough fiddling with the shift, airmass correction, and scaling, one should be able to reproduce the nstelluric result, but it is more straightforward with nstelluric. I (Kathleen) did play around and got close, close enough to convince myself that the slope was okay, but I never got the noise down to the nstelluric level, then I got bored.

FINAL spectrum: axtfobj_bb.fits